-
Two common definitions of the spatially local rate of kinetic energy cascade at some scale $\ell$ in turbulent flows are (i) the cubic velocity difference term appearing in the 'scale-integrated local Kolmogorov–Hill' equation (structure-function approach), and (ii) the subfilter-scale energy flux term in the transport equation for subgrid-scale kinetic energy (filtering approach). We perform a comparative study of both quantities based on direct numerical simulation data of isotropic turbulence at Taylor-scale Reynolds number 1250. While past observations of negative subfilter-scale energy flux (backscatter) have led to debates regarding the interpretation and relevance of such observations, we argue that the interpretation of the local structure-function-based cascade-rate definition is unambiguous, since it arises from a divergence term in scale space. Conditional averaging is used to explore the relationship between the local cascade rate and the local filtered viscous dissipation rate, as well as filtered velocity gradient tensor properties such as its invariants. We find statistically robust evidence of an inverse cascade when the large-scale rotation rate is strong and the large-scale strain rate is weak. Even stronger net inverse cascading is observed in the 'vortex compression' quadrant $R > 0$, $Q > 0$, where $R$ and $Q$ are velocity gradient invariants. Qualitatively similar but quantitatively much weaker trends are observed for the conditionally averaged subfilter-scale energy flux. Flow visualizations show consistent trends: spatially, the inverse cascade events appear to be located within large-scale vortices, specifically in subregions where $R$ is large.
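As an aside, the velocity-gradient invariants $Q$ and $R$ referenced in the abstract are standard quantities computable directly from the (filtered) velocity gradient tensor $A_{ij} = \partial u_i / \partial x_j$. A minimal sketch in Python follows; the function name and the example tensor are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def qr_invariants(A):
    """Second and third invariants of a traceless velocity gradient
    tensor A_ij = du_i/dx_j, as used in Q-R quadrant analysis.
    For incompressible flow (tr A = 0):
        Q = -1/2 tr(A^2),   R = -1/3 tr(A^3) = -det(A).
    Q > 0 indicates rotation-dominated regions, Q < 0 strain-dominated.
    """
    A2 = A @ A
    Q = -0.5 * np.trace(A2)
    R = -np.trace(A2 @ A) / 3.0
    return Q, R

# Illustrative example: a purely rotational (vortex-like) gradient.
# Rotation dominates strain, so Q > 0; the symmetry gives R = 0.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
Q, R = qr_invariants(A)
# Q = 1.0, R = 0.0 for this rotation-dominated tensor
```

The 'vortex compression' quadrant discussed above corresponds to points where both invariants computed this way are positive.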
-
The brain is arguably the most powerful computation system known. It is extremely efficient at processing large amounts of information and can discern signals from noise, adapt, and filter faulty information, all while running on only 20 watts of power. The human brain's processing efficiency, progressive learning, and plasticity are unmatched by any computer system. Recent advances in stem cell technology have elevated the field of cell culture to higher levels of complexity, such as the development of three-dimensional (3D) brain organoids that recapitulate human brain functionality better than traditional monolayer cell systems. Organoid Intelligence (OI) aims to harness the innate biological capabilities of brain organoids for biocomputing and synthetic intelligence by interfacing them with computer technology. With the latest strides in stem cell technology, bioengineering, and machine learning, we can explore the ability of brain organoids to compute and store given information (input), execute a task (output), and study how this affects the structural and functional connections in the organoids themselves. Furthermore, understanding how learning generates and changes patterns of connectivity in organoids can shed light on the early stages of cognition in the human brain. Investigating and understanding these concepts is an enormous, multidisciplinary endeavor that necessitates the engagement of both the scientific community and the public. Thus, on February 22–24, 2022, Johns Hopkins University held the first Organoid Intelligence Workshop to form an OI community and to lay out the groundwork for the establishment of OI as a new scientific discipline. The potential of OI to revolutionize computing, neurological research, and drug development was discussed, along with a vision and roadmap for its development over the coming decade.